Greedy Multiple Instance Learning via Codebook Learning and Nearest Neighbor Voting

Authors

  • Gang Chen
  • Jason J. Corso
Abstract

Multiple instance learning (MIL) has recently attracted great attention in the machine learning community. However, most MIL algorithms are very slow and cannot be applied to large datasets. In this paper, we propose a greedy strategy to speed up the multiple instance learning process. Our contribution is twofold. First, we propose a density ratio model and show that maximizing a density ratio function is a lower bound of the diverse density (DD) model under certain conditions. Second, we use a histogram ratio between positive bags and negative bags to represent the density ratio function, and we find codebooks separately for positive bags and negative bags with a greedy strategy. For testing, we use a nearest neighbor strategy to classify new bags. We test our method on both small benchmark datasets and the large TRECVID MED11 dataset. The experimental results show that our method yields accuracy comparable to the current state of the art, while being at least one order of magnitude faster.
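To make the pipeline in the abstract more concrete, the sketch below shows one plausible instantiation of the codebook-plus-nearest-neighbor-voting idea: codewords are selected greedily from the pooled training instances, each bag is summarized as a normalized codeword histogram, and a test bag is labeled by k-nearest-neighbor voting over those histograms. The greedy selection rule, the function names, and the toy data are illustrative assumptions; they do not reproduce the paper's histogram-ratio objective or its separate positive/negative codebooks.

```python
# Minimal sketch of a bag-of-codewords + nearest-neighbor-voting MIL pipeline.
# All names and the farthest-point greedy rule are assumptions for illustration.
import numpy as np

def build_codebook(instances, k, seed=0):
    """Greedily pick k codewords: start from a random instance, then repeatedly
    add the instance farthest from the current codebook (farthest-point greedy)."""
    rng = np.random.default_rng(seed)
    codebook = [instances[rng.integers(len(instances))]]
    for _ in range(k - 1):
        dists = np.min(
            np.linalg.norm(instances[:, None, :] - np.asarray(codebook)[None, :, :], axis=2),
            axis=1,
        )
        codebook.append(instances[np.argmax(dists)])
    return np.asarray(codebook)

def bag_histogram(bag, codebook):
    """Quantize each instance in a bag to its nearest codeword and return a
    normalized histogram of codeword counts."""
    idx = np.argmin(np.linalg.norm(bag[:, None, :] - codebook[None, :, :], axis=2), axis=1)
    hist = np.bincount(idx, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

def classify_bag(test_bag, train_hists, train_labels, codebook, k=3):
    """k-nearest-neighbor voting over bag histograms."""
    h = bag_histogram(test_bag, codebook)
    d = np.linalg.norm(train_hists - h, axis=1)
    votes = train_labels[np.argsort(d)[:k]]
    return int(np.round(votes.mean()))

# Toy usage: positive bags contain a small cluster near (5, 5); negative bags do not.
rng = np.random.default_rng(1)
pos_bags = [np.vstack([rng.normal(0, 1, (8, 2)), rng.normal(5, 0.5, (2, 2))]) for _ in range(10)]
neg_bags = [rng.normal(0, 1, (10, 2)) for _ in range(10)]
bags, labels = pos_bags + neg_bags, np.array([1] * 10 + [0] * 10)

codebook = build_codebook(np.vstack(bags), k=16)
train_hists = np.vstack([bag_histogram(b, codebook) for b in bags])
test_bag = np.vstack([rng.normal(0, 1, (8, 2)), rng.normal(5, 0.5, (2, 2))])
print(classify_bag(test_bag, train_hists, labels, codebook))  # expected: 1
```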

Similar articles

Application of k-Nearest Neighbor on Feature Projections Classifier to Text Categorization

This paper presents the results of applying an instance-based learning algorithm, the k-Nearest Neighbor Method on Feature Projections (k-NNFP), to text categorization and compares it with the k-Nearest Neighbor classifier (k-NN). k-NNFP is similar to k-NN except that it finds the nearest neighbors according to each feature separately. It then combines these predictions using majority voting. This pr...
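As a concrete illustration of the per-feature voting scheme described in this snippet, here is a small k-NNFP-style prediction sketch: for each feature, the k closest training examples along that single dimension cast a class vote, and the per-feature votes are then combined by majority vote. The function name, tie-breaking, and toy data are hypothetical assumptions, not the authors' implementation.

```python
# Sketch of k-NN on feature projections: vote per feature, then combine the votes.
import numpy as np
from collections import Counter

def knnfp_predict(X_train, y_train, x, k=3):
    per_feature_votes = []
    for f in range(X_train.shape[1]):
        # k nearest training examples along feature f only
        nearest = np.argsort(np.abs(X_train[:, f] - x[f]))[:k]
        per_feature_votes.append(Counter(y_train[nearest]).most_common(1)[0][0])
    # combine the per-feature predictions with a simple majority vote
    return Counter(per_feature_votes).most_common(1)[0][0]

# Toy usage with a hypothetical 2-class, 3-feature dataset
X_train = np.array([[0.1, 1.0, 5.0], [0.2, 1.1, 5.2], [2.9, 4.0, 0.1], [3.1, 4.2, 0.0]])
y_train = np.array([0, 0, 1, 1])
print(knnfp_predict(X_train, y_train, np.array([0.15, 1.05, 5.1]), k=2))  # expected: 0
```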

An LVQ-based adaptive algorithm for learning from very small codebooks

The present paper introduces an adaptive algorithm for competitive training of a nearest neighbor (NN) classifier when using a very small codebook. The new learning rule is based on the well-known LVQ method, and uses an alternative neighborhood concept to estimate optimal locations of the codebook vectors. Experiments over synthetic and real databases suggest the advantages of the learning tec...
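For context, the sketch below shows the standard LVQ1 update that this line of work builds on: the nearest codebook vector is pulled toward a training sample when their labels agree and pushed away otherwise. The paper's alternative neighborhood concept is not reproduced here; everything in this snippet is an illustrative assumption.

```python
# Textbook LVQ1 training of a tiny codebook for a nearest neighbor classifier.
import numpy as np

def lvq1_train(X, y, codebook, code_labels, lr=0.1, epochs=20):
    codebook = codebook.astype(float).copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # find the nearest codebook vector (the "winner")
            j = np.argmin(np.linalg.norm(codebook - xi, axis=1))
            # move it toward the sample if labels match, away otherwise
            step = lr * (xi - codebook[j])
            codebook[j] += step if code_labels[j] == yi else -step
        lr *= 0.95  # slowly decay the learning rate
    return codebook

# Toy usage: two Gaussian classes, one codebook vector per class
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
init = np.array([[-1.0, -1.0], [1.0, 1.0]])
trained = lvq1_train(X, y, codebook=init, code_labels=np.array([0, 1]))
print(trained)  # each row should drift toward its class mean (about -2,-2 and 2,2)
```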

Key Instance Detection in Multi-Instance Learning

The goal of traditional multi-instance learning (MIL) is to predict the labels of the bags, whereas in many real applications, it is desirable to get the instance labels, especially the labels of key instances that trigger the bag labels, in addition to getting bag labels. Such a problem has been largely unexplored before. In this paper, we formulate the Key Instance Detection (KID) problem, an...

Nearest Neighbor Ensembles Combines with Weighted Instance and Feature Sub Set Selection: A Survey

Ensemble learning deals with methods that employ multiple learners to solve a problem. The generalization ability of an ensemble is usually significantly better than that of a single learner, so ensemble methods are very attractive; at the same time, the feature selection process plays an important role in ensemble classifiers. This paper presents an analysis of classification techniques of ...

Journal:
  • CoRR

Volume: abs/1205.0610

Publication year: 2012